Guild icon
Project Sekai
🔒 UMDCTF 2023 / ✅-ml-elgyems-password
Sutx pinned a message to this channel. 04/28/2023 3:00 PM
Avatar
@Violin wants to collaborate 🤝
Avatar
@fleming wants to collaborate 🤝
Avatar
@rubiya wants to collaborate 🤝
Avatar
@unpickled admin bot wants to collaborate 🤝
Avatar
unpickled admin bot 04/29/2023 3:52 PM
im still supposed to be doing hw, but downloading this stuff we might be able to cheese with gradient descent (and this opinion is not the most well-founded one i just took like a bit of ML a really long time ago) (edited)
Avatar
i recall seeing a similar chal before
15:53
You should look into how a neural network works at its core. What computations are actually occurring? How can we reverse to get the input?
15:53
hint
Avatar
Avatar
unpickled admin bot
im still supposed to be doing hw, but downloading this stuff we might be able to cheese with gradient descent (and this opinion is not the most well-founded one i just took like a bit of ML a really long time ago) (edited)
unpickled admin bot 04/29/2023 3:56 PM
ok looking through my illegible notes for this, backpropagation might work too
Avatar
Avatar
sahuang
You should look into how a neural network works at its core. What computations are actually occurring? How can we reverse to get the input?
unpickled admin bot 04/29/2023 3:59 PM
i think backpropagation kinda loosely matches this hint but idk, those are just my 2 ideas
15:59
ok now i go do more hw again (edited)
Avatar
pretty sure its back prop
Avatar
unpickled admin bot 04/29/2023 4:01 PM
hmmm i might have a pytorch impl somewhere in my notes, but not too sure, ill check in a bit then i have to type it out :<, wouldnt count on it tho (edited)
Avatar
Avatar
sahuang
pretty sure its back prop
unpickled admin bot 04/29/2023 5:51 PM
i think its gradient...
17:51
backprop would require changing the model i think
Avatar
unpickled admin bot 04/29/2023 6:12 PM
currently my problem with gradient descent is my values are going to inf/nan, even with really small learning rates, and i cant normalise cuz.. the normalised tensor is a non-leaf tensor (?????) (edited)
Avatar
unpickled admin bot 04/29/2023 6:24 PM
smfh outputs were wrong
Avatar
@jayden wants to collaborate 🤝
Avatar
unpickled admin bot 04/29/2023 6:38 PM
for i in range(10000):
    output = model(input_tensor)
    loss = lossfunc(output, target_output)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    if torch.allclose(target_output, output, rtol=1e-10, atol=1e-10):
        break
print("Input:", tensor_to_ascii(input_tensor), "Loss: ", loss.item())
rn i get it going to loss of inf (edited)
18:38
which is weird
18:39
like wtf
18:40
(loss inf will start turning values into inf and eventually result in a loss of nan, which will turn values into nan, so not really nice)
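For reference, one standard way to stop a run like this from spiralling to inf/nan is to clip the gradient before each optimizer step; a minimal self-contained sketch with toy stand-in tensors (not the script used in this chat):

import torch

x = torch.randn(22, dtype=torch.float64, requires_grad=True)   # stand-in input
W = torch.randn(37, 22, dtype=torch.float64)                   # stand-in weight
target = torch.randn(37, dtype=torch.float64)
opt = torch.optim.SGD([x], lr=1e-3)

for _ in range(100):
    loss = torch.nn.functional.mse_loss(W @ x, target)
    opt.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_([x], max_norm=1.0)  # rescale grad if its norm exceeds 1
    opt.step()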
Avatar
from what i heard the precision in this chal is very bad
Avatar
unpickled admin bot 04/29/2023 6:41 PM
normally youre supposed to threshold and after a while say close enough (edited)
18:41
but i set it to just keep going till its exactly equal
Avatar
unpickled admin bot 04/29/2023 6:52 PM
ah
18:52
im dumb
18:52
need to use a diff optimiser
18:52
RMSprop prob works better here
18:53
ok its so much better now!
18:55
Ok this is working
18:55
@sahuang i got a decently precise input
18:55
how precise should i go
Avatar
Avatar
unpickled admin bot
for i in range(10000): output = model(input_tensor) loss = lossfunc(output, target_output) loss.backward() optimizer.step() optimizer.zero_grad() if torch.allclose(target_output, output, rtol=1e-10, atol=1e-10): break print("Input:", tensor_to_ascii(input_tensor), "Loss: ", loss.item()) rn i get it going to loss of inf (edited)
unpickled admin bot 04/29/2023 6:55 PM
this check passes
18:55
welp
18:55
its working
18:56
how did he not solve it with that
Avatar
unpickled admin bot 04/29/2023 6:56 PM
idfk
18:56
lmao
18:56
current loss btw
18:59
908 :<
18:59
i need it much lower
Avatar
ok i think he is referring to the fact that nn.sequential is reversible
19:01
whats the activation?
Avatar
Avatar
sahuang
whats the activation?
unpickled admin bot 04/29/2023 7:02 PM
?
19:02
got no clue what that is
Avatar
unpickled admin bot 04/29/2023 7:03 PM
ye idk i assume default....?
Avatar
they dont have activation function
19:06
all nn.Linear
Avatar
unpickled admin bot 04/29/2023 7:06 PM
oh
Avatar
what did you do to load model.txt?
Avatar
Original message was deleted or could not be loaded.
unpickled admin bot 04/29/2023 7:07 PM
.
19:07
creates a model with the weights and biases
Avatar
oh ok
19:10
so if i understand correctly, the core of the chal is just
model = nn.Sequential(
    nn.Linear(22, 69),
    nn.Linear(69, 420),
    nn.Linear(420, 800),
    nn.Linear(800, 85),
    nn.Linear(85, 13),
    nn.Linear(13, 37)
)
now all layers bias/weight are given. we give you output and you need to recover input
19:10
input is length 22 of ascii chars, output is length 37 and is given
19:10
is that right
Avatar
unpickled admin bot 04/29/2023 7:11 PM
yep
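For reference, the whole model here is six chained affine maps with no activations, so the forward pass is tiny. A minimal numpy sketch of the computation, with random stand-in weights (the real W/b come from model.txt):

import numpy as np

# stand-in shapes matching the chal's nn.Sequential; real weights come from model.txt
sizes = [22, 69, 420, 800, 85, 13, 37]
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((sizes[i + 1], sizes[i])) for i in range(6)]
bs = [rng.standard_normal(sizes[i + 1]) for i in range(6)]

x = np.array([ord(c) for c in "UMDCTF{aaaaaaaaaaaaaa}"], dtype=np.float64)  # fake 22-char flag
y = x
for W, b in zip(Ws, bs):
    y = W @ y + b      # each nn.Linear without an activation is just an affine map
print(y.shape)         # (37,), the shape of the published output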
Avatar
i asked my ML friend and he said its directly recoverable
Avatar
unpickled admin bot 04/29/2023 7:11 PM
given its linear, prob
19:11
im just lazy
Avatar
yeah
19:12
linear functions
19:12
each layer is y=Wx+b and y W b are all known
19:12
right
Avatar
unpickled admin bot 04/29/2023 7:12 PM
yep
19:12
:thonk: wait what if we just
19:12
make all the biases negative
19:12
and invert W
Avatar
yeah
19:13
its LA stuff i think
Avatar
unpickled admin bot 04/29/2023 7:13 PM
and obv reverse the order of the model
Avatar
yeah
19:13
thats the plan
19:13
do it reverse order
Avatar
unpickled admin bot 04/29/2023 7:13 PM
hmmm i can prob throw that together rq
Avatar
i think precision loss comes from matrix inverse
Avatar
unpickled admin bot 04/29/2023 7:14 PM
weights is a matrix, ye
Avatar
unpickled admin bot 04/29/2023 7:14 PM
hmmmmmmm
19:14
could we couple it??
19:14
i.e
19:14
matrix inverse
19:14
guess input vector
19:14
then use gradient descent to get closer?
Avatar
if we have values why do we need gradient descent
Avatar
unpickled admin bot 04/29/2023 7:16 PM
would they be exact though?
19:16
i thought there was precision loss issues
Avatar
if i understand correctly, for example last layer nn.Linear(13, 37): we have y = W * x + b, where b is size 37x1, W is size 37x13, y is size 37x1 (the output). we need to recover x, which is 13x1
19:17
(just trying to make sure we are on same page)
Avatar
Avatar
unpickled admin bot
i thought there was precision loss issues
o shit what if we use c++
19:18
lmao
Avatar
unpickled admin bot 04/29/2023 7:18 PM
i mean
19:19
thats going to be really painful to write
Avatar
there's code to copy
Avatar
Avatar
sahuang
(just trying to make sure we are on same page)
unpickled admin bot 04/29/2023 7:19 PM
ye we are
19:19
true
Avatar
so basically W^-1 * (y-b)?
Avatar
unpickled admin bot 04/29/2023 7:19 PM
ye
Avatar
unpickled admin bot 04/29/2023 7:20 PM
:thonk: i prob shouldve paid more attention in the ml course i took lmao (edited)
Avatar
thats why author said he didnt use gradient descent or back prop
19:20
maybe directly reversed
19:21
ok will write a program to try
19:21
tbf its better to
19:21
use fractions
Avatar
unpickled admin bot 04/29/2023 7:21 PM
ye cuz no precision issues?
19:22
btw
def tensor_to_ascii(t):
    data = ""
    for x in t:
        try:
            data += chr(int(x.item()))
        except:
            data += "\x00"
    return data
(edited)
Avatar
my friend told me just cast all values to fp64 then calculate
19:23
should be ok
19:23
torch.float64
19:23
right
Avatar
unpickled admin bot 04/29/2023 7:24 PM
idk
19:24
64 bits of precision is prob enough in life but
Avatar
Avatar
unpickled admin bot
btw def tensor_to_ascii(t): data = "" for x in t: try: data += chr(int(x.item())) except: data += "\x00" return data (edited)
whats this
19:24
for final?
Avatar
unpickled admin bot 04/29/2023 7:24 PM
takes the input tensor
19:24
ye
Avatar
unpickled admin bot 04/29/2023 7:24 PM
and converts it to ascii/flag
19:27
oh shit
19:28
we can actually
19:28
first multiply all W/b/y to magnitude of 1 then later divide back
Avatar
uh how do you convert xxx/100...00 to the double
Avatar
unpickled admin bot 04/29/2023 7:37 PM
oh i uh
19:37
literally just eval it
19:37
lmao
Avatar
is there precision loss?
Avatar
unpickled admin bot 04/29/2023 7:38 PM
idt there should be?
19:38
idk
19:39
moving script here, too big for 1 msg so gotta break into 2
19:39
output = open("model (1).txt").read() outputs = open("outputs (2).txt").read() data = output.split("layer") import torch import numpy as np def rm_empty(l): if '' in l: l.remove('') return l def tensor_to_ascii(t): data = "" for x in t: try: data += chr(int(x.item())) except: data += "\x00" return data data = rm_empty(data) from torch import tensor def parse_data(d): dn = rm_empty(d.split("\n")) lr = dn[0] d2 = ("\n".join(dn[1:])).split("weights") d2 = rm_empty(d2) biases = d2[0] biases = rm_empty(biases.split("\n"))[1:][0] biases = tensor(eval('[np.float64(' + biases.replace(" ", '), np.float64(').replace('/', ')/np.float64(') + ')]'), dtype=torch.float64) weights = rm_empty(d2[1].split("\n")) for i in range(len(weights)): weights[i] = eval('[np.float64(' + weights[i].replace(" ", '), np.float64(').replace('/', ')/np.float64(') + ')]') return lr, biases, tensor(weights, dtype=torch.float64) weights = [] biases = [] for x in range(len(data)): _, lrbiases, lrweights = parse_data(data[x]) weights.append(lrweights) biases.append(lrbiases) from torch import nn model = nn.Sequential( nn.Linear(22, 69), nn.Linear(69, 420), nn.Linear(420, 800), nn.Linear(800, 85), nn.Linear(85, 13), nn.Linear(13, 37) ) def tensor_to_ascii(t): data = "" for x in t: try: data += chr(round(x.item())) except: data += "\x00" return data for i, layer in enumerate(model): if isinstance(layer, nn.Linear): layer.weight = nn.Parameter(weights[i]) layer.biases = nn.Parameter(biases[i]) target_output = tensor(eval('[np.float64(' + outputs.replace(" ", '), np.float64(').replace('/', ')/np.float64(') + ')]'), dtype=torch.float64) (edited)
19:39
prefix = "UMDCTF{" suffix = "}" input_tensor = torch.zeros(22, dtype=torch.float64) for i in range(len(prefix)): input_tensor[i] = ord(prefix[i]) input_tensor[-1] = ord(suffix) for i in range(len(prefix), len(input_tensor) - 1): input_tensor[i] = torch.randint(low=32, high=127, size=(1,), dtype=torch.float64) input_tensor.requires_grad = True MSE = nn.MSELoss() from torch.optim import * for x in range(0,10): optimizer = RMSprop([input_tensor], lr=0.1**x) for i in range(1000): print(loss) output = model(input_tensor) loss = MSE(output, target_output) loss.backward() optimizer.step() optimizer.zero_grad() if torch.allclose(output, target_output, rtol=1e-10, atol=1e-10): break print(tensor_to_ascii(input_tensor)) (edited)
Avatar
I have an n*n matrix like so:
[
    [Fraction(1, 1), Fraction(-1, 2)],
    [Fraction(-4, 9), Fraction(1, 1)]
]
And I want to find its inverse. Since it has Fractions in it, doing: from numpy.linalg
19:39
i wanna do this
19:39
use Fraction
19:39
lol
19:39
in this way all are ints
19:41
this wont work
Avatar
unpickled admin bot 04/29/2023 7:41 PM
ah
19:41
uhhhhh i can prob do a decently contrived replace
Avatar
wait another issue is W is not even square, how do we invert it
Avatar
unpickled admin bot 04/29/2023 7:44 PM
idk
Avatar
had to solve linear equations
Avatar
unpickled admin bot 04/29/2023 7:47 PM
?
Avatar
like you cant invert then multiply
19:47
from what i saw
Avatar
unpickled admin bot 04/29/2023 7:47 PM
ye since mat isnt square (edited)
Avatar
[y1 ... y37] = [37x13] * [x1 ... x13] so yeah, 37 equations in only 13 unknowns, 13 independent ones are enough to solve for x
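For reference, each rectangular layer can be solved on its own in float64: np.linalg.lstsq finds the x minimising ||Wx - (y - b)||, which is the exact solution whenever the system is consistent (here it is, since y really was produced from some x). A minimal sketch with random stand-in data:

import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((37, 13))        # tall layer weight: 37 equations, 13 unknowns
x_true = rng.integers(32, 127, 13).astype(np.float64)
b = rng.standard_normal(37)
y = W @ x_true + b                       # the layer's output

x_rec, *_ = np.linalg.lstsq(W, y - b, rcond=None)  # least squares; exact for consistent systems
print(np.allclose(x_rec, x_true))        # True up to float64 rounding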
Avatar
Avatar
sahuang
use Fraction
unpickled admin bot 04/29/2023 7:53 PM
output = open("model (1).txt").read() outputs = open("outputs (2).txt").read() def rm_empty(l): if '' in l: l.remove('') return l from fractions import Fraction def parse_data(d): dn = rm_empty(d.split("\n")) lr = dn[0] d2 = ("\n".join(dn[1:])).split("weights") d2 = rm_empty(d2) biases = d2[0] biases = rm_empty(biases.split("\n"))[1:][0] biases = eval('[Fraction("' + biases.replace(" ", '"), Fraction("') + '")]') weights = rm_empty(d2[1].split("\n")) for i in range(len(weights)): weights[i] = eval('[Fraction("' + weights[i].replace(" ", '"), Fraction("') + '")]') return lr, biases, weights weights = [] biases = [] for x in range(len(data)): _, lrbiases, lrweights = parse_data(data[x]) weights.append(lrweights) biases.append(lrbiases) target_output = eval('[Fraction("' + outputs.replace(" ", '"), Fraction("') + '")]')
19:53
you cant tensor it because fractions are an unknown dtype
19:53
hence you cant "load it into" the model
Avatar
unpickled admin bot 04/29/2023 8:00 PM
hmmmmmmmmmmmmmmm
20:00
waaaaait it just checks dtype right
20:00
doesnt np have some
20:00
really really precise integer stuff
Avatar
Avatar
sahuang
torch.float64
unpickled admin bot 04/29/2023 8:01 PM
imma try my script with this
20:01
but
20:02
you said you wanted it in fractions to do matrix stuff
20:02
so
Avatar
i think fraction can work but not too sure, gonna test it out
20:02
float64 should also work
20:02
in theory
Avatar
unpickled admin bot 04/29/2023 8:02 PM
my script is a bit diff....................
20:02
its more of a
20:02
very guided brute force through "optimising" the input
Avatar
@Utaha wants to collaborate 🤝
Avatar
unpickled admin bot 04/29/2023 8:02 PM
oh hi
Avatar
I thought we can always resort to gmpy2 when there's a precision issue
Avatar
basically they give numbers such as 8518196211434573697728037/100000000000000000 as string
20:04
and we have [y1 ... y37] = [37x13] * [x1 ... x13] need to solve for x given y
20:04
37 and 13 are different in each layer, we do it 6 times
20:04
so each time we hope to see perfect precision (at least Fraction could achieve it imo) also we can multiply everything by 1000 before calculation then divide by 1000
20:11
when will integer overflow happen? what if i just multiply everything by 10^17?
Avatar
unpickled admin bot 04/29/2023 8:11 PM
in py?
20:11
py doesnt int overflow?
Avatar
maybe worth a try (result will be double though) i will try fraction first
Avatar
unpickled admin bot 04/29/2023 8:14 PM
wtf
20:15
errrrrrr
Avatar
Avatar
unpickled admin bot
prefix = "UMDCTF{" suffix = "}" input_tensor = torch.zeros(22, dtype=torch.float64) for i in range(len(prefix)): input_tensor[i] = ord(prefix[i]) input_tensor[-1] = ord(suffix) for i in range(len(prefix), len(input_tensor) - 1): input_tensor[i] = torch.randint(low=32, high=127, size=(1,), dtype=torch.float64) input_tensor.requires_grad = True MSE = nn.MSELoss() from torch.optim import * for x in range(0,10): optimizer = RMSprop([input_tensor], lr=0.1**x) for i in range(1000): print(loss) output = model(input_tensor) loss = MSE(output, target_output) loss.backward() optimizer.step() optimizer.zero_grad() if torch.allclose(output, target_output, rtol=1e-10, atol=1e-10): break print(tensor_to_ascii(input_tensor)) (edited)
unpickled admin bot 04/29/2023 8:38 PM
got a loss of 4.9
20:38
lmao
20:38
this is hardge
20:39
loss of 1.2
20:39
bro how the fuck is this wrong
Avatar
1.2 is very low
Avatar
unpickled admin bot 04/29/2023 8:39 PM
i think i have an unintended lmao
Avatar
but you didnt get input?
Avatar
unpickled admin bot 04/29/2023 8:39 PM
i have input
20:39
input is not flag
Avatar
unpickled admin bot 04/29/2023 8:40 PM
but it matches almost exactly with output
20:40
collision? (edited)
Avatar
maybe
Avatar
unpickled admin bot 04/29/2023 8:40 PM
tensor([ 74.8768, 91.2488, 30.9799, -82.4857, -79.5941, 50.4259, 92.7775, 33.6500, -21.4305, 94.0301, 84.4334, 91.1794, 54.6718, 28.5342, 31.8013, 12.2619, 25.8559, -42.9601, 88.9332, 28.9249, 82.2870, 33.0996], dtype=torch.float64, requires_grad=True)
my tensor btw
20:43
ye i can reliably make tensors like this
20:43
lmaoooo
20:44
at that precision (1.2), obv, input tensors dont change much
20:44
so
20:45
aaaaaaa
Avatar
Avatar
sahuang
maybe
unpickled admin bot 04/29/2023 8:45 PM
ya collisions
20:45
i hate ml alr
20:45
lmao
20:45
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
Avatar
correct way is to calculate directly i guess, training will give wrong input
Avatar
unpickled admin bot 04/29/2023 8:49 PM
i mean he says it is working
Avatar
unpickled admin bot 04/29/2023 8:49 PM
problem is theres too many inputs that resolve to 1 output
Avatar
yeah
Avatar
unpickled admin bot 04/29/2023 8:49 PM
so like hash collisions
20:49
so theres a loss function used in the training
20:49
which we can rebuild to account for the flag format
20:49
and then hopefully it works
Avatar
unpickled admin bot 04/29/2023 9:38 PM
@sahuang i need to log off for today, feel free to modify giant block of code above if you want to work on my solve path, or work on the direct rev part idk really (edited)
Avatar
sure
21:38
i will do this tmr ig, no time
Avatar
unpickled admin bot 04/29/2023 9:39 PM
kkty
21:39
but the chall author believes this is workable
21:40
the last thing we would really need is probably just a custom loss function
Avatar
Avatar
unpickled admin bot
output = open("model (1).txt").read() outputs = open("outputs (2).txt").read() data = output.split("layer") import torch import numpy as np def rm_empty(l): if '' in l: l.remove('') return l def tensor_to_ascii(t): data = "" for x in t: try: data += chr(int(x.item())) except: data += "\x00" return data data = rm_empty(data) from torch import tensor def parse_data(d): dn = rm_empty(d.split("\n")) lr = dn[0] d2 = ("\n".join(dn[1:])).split("weights") d2 = rm_empty(d2) biases = d2[0] biases = rm_empty(biases.split("\n"))[1:][0] biases = tensor(eval('[np.float64(' + biases.replace(" ", '), np.float64(').replace('/', ')/np.float64(') + ')]'), dtype=torch.float64) weights = rm_empty(d2[1].split("\n")) for i in range(len(weights)): weights[i] = eval('[np.float64(' + weights[i].replace(" ", '), np.float64(').replace('/', ')/np.float64(') + ')]') return lr, biases, tensor(weights, dtype=torch.float64) weights = [] biases = [] for x in range(len(data)): _, lrbiases, lrweights = parse_data(data[x]) weights.append(lrweights) biases.append(lrbiases) from torch import nn model = nn.Sequential( nn.Linear(22, 69), nn.Linear(69, 420), nn.Linear(420, 800), nn.Linear(800, 85), nn.Linear(85, 13), nn.Linear(13, 37) ) def tensor_to_ascii(t): data = "" for x in t: try: data += chr(round(x.item())) except: data += "\x00" return data for i, layer in enumerate(model): if isinstance(layer, nn.Linear): layer.weight = nn.Parameter(weights[i]) layer.biases = nn.Parameter(biases[i]) target_output = tensor(eval('[np.float64(' + outputs.replace(" ", '), np.float64(').replace('/', ')/np.float64(') + ')]'), dtype=torch.float64) (edited)
unpickled admin bot 04/29/2023 9:56 PM
bug in this code btw
so what we want to do is create a custom_loss function that includes weightages for the flag format
since i use rmsprop here, we can essentially just scale the flag format factor to be really high (but not insanely high*)
to familiarise yourself with the idea i used here, its just gradient descent (reason for this is cuz it returns "collisions") (edited)
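For reference, a minimal sketch of that custom-loss idea (the weight value and names here are hypothetical, not the script that was run): the usual reconstruction term plus a heavily scaled penalty for drifting off the flag format:

import torch

FMT = torch.tensor([float(ord(c)) for c in "UMDCTF{"], dtype=torch.float64)

def custom_loss(input_tensor, output, target_output, weight=1000.0):
    # reconstruction term: how far the forward pass lands from the published output
    mse = torch.nn.functional.mse_loss(output, target_output)
    # format term: first 7 chars should stay "UMDCTF{", the last should stay "}"
    fmt_pen = torch.nn.functional.mse_loss(input_tensor[:7], FMT) \
            + (input_tensor[-1] - float(ord("}"))) ** 2
    return mse + weight * fmt_pen  # scale the format factor up, per the message above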
Avatar
unpickled admin bot 04/29/2023 10:21 PM
@sahuang neil says you are ml god so uh rq.... how would you approach the custom loss function (not asking you to make it, just how youd do it)
Avatar
lmao wtf, idk much ML
22:21
hfz does
Avatar
unpickled admin bot 04/29/2023 10:21 PM
oh
22:21
can we get hfz in here (edited)
Avatar
@hfz wants to collaborate 🤝
Avatar
unpickled admin bot 04/29/2023 10:24 PM
Hi!
22:24
ok so
Avatar
unpickled admin bot 04/29/2023 10:24 PM
rn im cheesing this with some gradient descent
22:24
cuz the intended scares the shit out of me
22:24
the author confirmed this works
22:25
rn i can generate inputs that match the output
22:25
just not the right one
22:25
so the logical next step, confirmed by chall author (tho he doesnt know 100% cuz this is unintended),
Avatar
Avatar
sahuang
so if i understand correctly The core of the chal is just model = nn.Sequential( nn.Linear(22, 69), nn.Linear(69, 420), nn.Linear(420, 800), nn.Linear(800, 85), nn.Linear(85, 13), nn.Linear(13, 37) ) now all layers bias/weight are given. we give you output and you need to recover input
challenge is this btw, tldr @hfz
Avatar
@Legoclones wants to collaborate 🤝
Avatar
unpickled admin bot 04/29/2023 10:25 PM
is to modify the loss function in order to get the loss to be higher if it does not match the flag format
22:26
problem is apparently i suck at coding and ML so i got 0 clue how to do that!
Avatar
intended is to solve linear equations y=Wx+b knowing y,W,b. which 100% works but needs some coding on precision saving
Avatar
unpickled admin bot 04/29/2023 10:26 PM
(which also hurts pretty much)
22:26
wait
22:26
wat
22:26
@sahuang you can just
22:27
use numpy to store all the values
22:27
f64s are more than precise enough
22:27
then use torch.float64 as the dtype...... (edited)
Avatar
Avatar
sahuang
intended is to solve linear equations y=Wx+b knowing y,W,b. which 100% works but needs some coding on precision saving
unpickled admin bot 04/29/2023 10:28 PM
check above, i ran into the precision issue then used that
Avatar
doesnt that run into runtime error
Avatar
unpickled admin bot 04/29/2023 10:28 PM
nope?
22:28
oh
22:29
that
22:29
so you need to modify all tensors
22:29
to be of the same type
22:29
i.e your target_output tensor too (edited)
Avatar
unpickled admin bot 04/29/2023 10:29 PM
hence just append ,dtype=torch.float64 everywhere
22:29
call it a day
Avatar
the initial number has high precision and i cant get it into an array
Avatar
unpickled admin bot 04/29/2023 10:29 PM
i did that above
Avatar
send code?
Avatar
unpickled admin bot 04/29/2023 10:30 PM
output = open("model (1).txt").read() outputs = open("outputs (2).txt").read() data = output.split("layer") import torch import numpy as np def rm_empty(l): if '' in l: l.remove('') return l def tensor_to_ascii(t): data = "" for x in t: try: data += chr(int(x.item())) except: data += "\x00" return data data = rm_empty(data) from torch import tensor def parse_data(d): dn = rm_empty(d.split("\n")) lr = dn[0] d2 = ("\n".join(dn[1:])).split("weights") d2 = rm_empty(d2) biases = d2[0] biases = rm_empty(biases.split("\n"))[1:][0] biases = eval('[np.float64(' + biases.replace(" ", '), np.float64(').replace('/', ')/np.float64(') + ')]') weights = rm_empty(d2[1].split("\n")) for i in range(len(weights)): weights[i] = eval('[np.float64(' + weights[i].replace(" ", '), np.float64(').replace('/', ')/np.float64(') + ')]') return lr, biases, weights # in my impl this is tensor(weights, dtype=torch.float64 but i think you need to do math on weights so weights = [] biases = [] for x in range(len(data)): _, lrbiases, lrweights = parse_data(data[x]) weights.append(lrweights) biases.append(lrbiases) (edited)
22:31
high precision through np.float64
22:31
then whenever you make it a tensor
22:31
tensor(<varname>, dtype=torch.float64)
22:31
f64s are as precise as needed for this
22:32
welp imma return to the hell of hw
Avatar
what about if i need a number?
Avatar
never seen a ply file before, how do you read its contents?
Avatar
Avatar
sahuang
what about if i need a number?
unpickled admin bot 04/29/2023 10:33 PM
wdym?
Avatar
double x = y/z where y and z are float64
Avatar
Avatar
hfz
never seen a ply file before, how do you read its contents?
unpickled admin bot 04/29/2023 10:33 PM
what file is a .ply?
Avatar
Avatar
hfz
never seen a ply file before, how do you read its contents?
thats another chall
22:33
not this
Avatar
Avatar
sahuang
double x = y/z where y and z are float64
unpickled admin bot 04/29/2023 10:33 PM
that is these
22:33
lmao
22:33
it essentially does
Avatar
just do y/z?
Avatar
PLY file is #✅-misc-beheeyems-password
Avatar
unpickled admin bot 04/29/2023 10:33 PM
no
22:33
it alr did it
Avatar
no i mean when doing linear system solving i need this
Avatar
unpickled admin bot 04/29/2023 10:34 PM
oh
22:34
just divide
22:34
its all numpy floats (edited)
Avatar
unpickled admin bot 04/29/2023 10:34 PM
Avatar
how about outputs
22:35
you didnt process it
22:35
lemme try the same
Avatar
unpickled admin bot 04/29/2023 10:35 PM
i did
22:35
second codeblock (edited)
22:36
in the giant script above
22:36
nvm first
Avatar
Avatar
unpickled admin bot
output = open("model (1).txt").read() outputs = open("outputs (2).txt").read() data = output.split("layer") import torch import numpy as np def rm_empty(l): if '' in l: l.remove('') return l def tensor_to_ascii(t): data = "" for x in t: try: data += chr(int(x.item())) except: data += "\x00" return data data = rm_empty(data) from torch import tensor def parse_data(d): dn = rm_empty(d.split("\n")) lr = dn[0] d2 = ("\n".join(dn[1:])).split("weights") d2 = rm_empty(d2) biases = d2[0] biases = rm_empty(biases.split("\n"))[1:][0] biases = tensor(eval('[np.float64(' + biases.replace(" ", '), np.float64(').replace('/', ')/np.float64(') + ')]'), dtype=torch.float64) weights = rm_empty(d2[1].split("\n")) for i in range(len(weights)): weights[i] = eval('[np.float64(' + weights[i].replace(" ", '), np.float64(').replace('/', ')/np.float64(') + ')]') return lr, biases, tensor(weights, dtype=torch.float64) weights = [] biases = [] for x in range(len(data)): _, lrbiases, lrweights = parse_data(data[x]) weights.append(lrweights) biases.append(lrbiases) from torch import nn model = nn.Sequential( nn.Linear(22, 69), nn.Linear(69, 420), nn.Linear(420, 800), nn.Linear(800, 85), nn.Linear(85, 13), nn.Linear(13, 37) ) def tensor_to_ascii(t): data = "" for x in t: try: data += chr(round(x.item())) except: data += "\x00" return data for i, layer in enumerate(model): if isinstance(layer, nn.Linear): layer.weight = nn.Parameter(weights[i]) layer.biases = nn.Parameter(biases[i]) target_output = tensor(eval('[np.float64(' + outputs.replace(" ", '), np.float64(').replace('/', ')/np.float64(') + ')]'), dtype=torch.float64) (edited)
unpickled admin bot 04/29/2023 10:36 PM
full code
22:36
target_output = tensor(eval('[np.float64(' + outputs.replace(" ", '), np.float64(').replace('/', ')/np.float64(') + ')]'), dtype=torch.float64)
22:36
was second
Avatar
Avatar
unpickled admin bot
target_output = tensor(eval('[np.float64(' + outputs.replace(" ", '), np.float64(').replace('/', ')/np.float64(') + ')]'), dtype=torch.float64)
unpickled admin bot 04/29/2023 10:36 PM
turns into tensor, so just do target_output = eval('[np.float64(' + outputs.replace(" ", '), np.float64(').replace('/', ')/np.float64(') + ')]')
22:37
precision is low though
22:37
if i print it
22:37
need sth like 85181962.11434573697728037
Avatar
unpickled admin bot 04/29/2023 10:38 PM
what value is that meant to be?
22:38
i think it only prints a limited amount??
Avatar
8518196211434573697728037/100000000000000000
22:38
idk
22:38
could be?
Avatar
unpickled admin bot 04/29/2023 10:38 PM
no
22:38
it loses precision
22:38
lmaoo wtf
22:39
i think though
22:39
that level of precision doesnt matter
22:39
the reason i say that is
22:39
cuz i used that above
22:39
and it generated valid collisions
22:39
so
Avatar
I think solving the equations isn't that complicated, we don't even need to rebuild the model
Avatar
we dont
Avatar
just parse the parameters into matrices, then solve layer by layer
Avatar
only issue is precision
22:50
cant load data as float64 for some reason
Avatar
why do you need to worry about precision?
22:50
just keep them as fractions
Avatar
i am doing it
Avatar
you can do that in sympy I think
Avatar
but idk if np.linalg.solve can work with fractions
Avatar
I'd say matlab is the most suited tool for this, although I never used it
Avatar
hold on
23:00
you have to train it
Avatar
from fractions import Fraction
23:00
Fraction(8518196211434573697728037, 100000000000000000)
23:00
we can do calculations using the fraction objects
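For reference, Fraction arithmetic makes every matrix operation exact. A minimal sketch of helpers of the kind used later in this thread (matmul/matsum are hypothetical names, chosen to match the snippet further down):

from fractions import Fraction

def matmul(A, B):
    # exact product of two matrices stored as lists of lists of Fractions
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matsum(A, b):
    # add a bias vector b (flat list of Fractions) to a column vector A
    return [[A[i][0] + b[i]] for i in range(len(A))]

A = [[Fraction(1, 2), Fraction(3)], [Fraction(-4, 9), Fraction(1)]]
x = [[Fraction(2)], [Fraction(1, 3)]]
b = [Fraction(1), Fraction(0)]
print(matsum(matmul(A, x), b))  # [[Fraction(3, 1)], [Fraction(-5, 9)]]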
Avatar
nn.Linear(85, 13): in this case y is 13x1, W is 13x85, x is 85x1, this means you have 13 equations with 85 unknowns
23:01
but i dont think its intended, idk
Avatar
we start with the very last layer, we have Y = WX + B
Avatar
W is 37x13
Avatar
i mean second last layer
Avatar
Y, X, and B are 13x1
23:04
ah yeah
Avatar
and a few above
Avatar
unpickled admin bot 04/29/2023 11:04 PM
hfz this is kinda random but do yk how torch.optim.LBFGS works? (edited)
23:08
:sadge:
Avatar
Avatar
unpickled admin bot
hfz this is kinda random but do yk how torch.optim.LBFGS works? (edited)
uh no, never used it
Avatar
unpickled admin bot 04/29/2023 11:14 PM
kk (edited)
Avatar
ok now i have a question
23:20
suppose input is [x1,x2,...,x22]. after 6 layers we get [y1,y2,...,y37] where each y is a linear combination of the x's. then just solve this system?
23:20
whats the issue with this
Avatar
calculating X worked fine for last layer
23:20
[85181962.11434537 -82558593.57841527 51320422.15698447 50855552.596899845 102777290.7469408 260244431.67128614 -41910940.30700645 -121562941.17643394 89706297.60350907 99330170.65949443 -36853890.38421978 95177796.51577635 13157343.127865441 122638034.2406461 75223895.80595507 203024016.244885 212145165.20600376 44208997.46421799 58609160.887382485 328611319.2197367 -111984167.25720419 -4469690.991234731 -70135524.92187999 45010210.01148323 30623208.66198535 2558449.492180466 193122704.82683364 203119671.1013074 11450046.25626932 -1756080.7466687919 -318635976.1328053 156107263.70621404 218812721.0969335 -42383267.35600404 -76017188.644868 15657464.098871475 93862598.79557651]
[ 8.51819621e+07 -8.25585936e+07 5.13204222e+07 5.08555526e+07 1.02777291e+08 2.60244432e+08 -4.19109403e+07 -1.21562941e+08 8.97062976e+07 9.93301707e+07 -3.68538904e+07 9.51777965e+07 1.31573431e+07 1.22638034e+08 7.52238958e+07 2.03024016e+08 2.12145165e+08 4.42089975e+07 5.86091609e+07 3.28611319e+08 -1.11984167e+08 -4.46969099e+06 -7.01355249e+07 4.50102100e+07 3.06232087e+07 2.55844949e+06 1.93122705e+08 2.03119671e+08 1.14500463e+07 -1.75608075e+06 -3.18635976e+08 1.56107264e+08 2.18812721e+08 -4.23832674e+07 -7.60171886e+07 1.56574641e+07 9.38625988e+07]
Avatar
i just put 6 layers together
Avatar
you stacked all equations?
Avatar
first layer is 69x22 so after that we get [y1,y2,...,y69] where yi is a linear combination of x1-x22 based on first W
23:21
and so on
23:21
symbolic expressions i mean
23:22
another way is to do it layer-by-layer, where each layer we need to train to optimize loss
Avatar
my recovered input doesn't make sense smh
23:39
[-0.3302665578795254 4.088524661835484 1.582225422025971 2.228352830448657 -8.8105937753601 5.844176881744154 -0.6570651370145247 1.3964525161914585 -0.847161076540201 0.6887171027769219 -0.2769034719969953 3.527297226608247 3.5380032762048694 -0.539706423418792 -0.01838310162286505 -4.152757900644751 6.763915750017028 -1.913941071752785 3.226053300276378 1.0636395444207503 6.77621334041599 1.3337224681807407]
Avatar
unpickled admin bot 04/29/2023 11:45 PM
23:45
you did smthng very wrong, no offense but (edited)
23:47
yeah, the Y I get back doesn't make sense
Avatar
unpickled admin bot 04/29/2023 11:47 PM
biiiiiiiiggggg loss (edited)
23:47
@hfz do you know if its possible to do a guided gradient descent?
23:48
i have a not-guided gradient descent that can give me values that resolve to that output
23:48
just not
23:48
the flag value
23:48
so guided-gradient descent would be modifying the loss function safely to encourage UMDCTF{
23:48
but my thing bugs lmao cuz i cant code (edited)
Avatar
how would that help tho? I don't think we need to change the weights
Avatar
ok so the idea is
23:51
combine the 6 layers
23:51
get an equation and solve it
23:52
we have W1-W6 and b1-b6, suppose input is x: W6(W5(W4(W3(W2(W1x+b1)+b2)+b3)+b4)+b5)+b6=output. just expand it, get it to Ax=b then solve
23:53
looks easy?
Avatar
I solved layer by layer, but it's problematic, the precision loss propagates to the other layers
23:55
will try everything in one shot
Avatar
Avatar
sahuang
we have W1-W6 and b1-b6, suppose input is x: W6(W5(W4(W3(W2(W1x+b1)+b2)+b3)+b4)+b5)+b6=output. just expand it, get it to Ax=b then solve
just this
23:55
i will get the equation
23:55
by hand ig
23:56
in the end we should get overdetermined equations
Avatar
Avatar
hfz
how would that help tho? I don't think we need to change the weights
unpickled admin bot 04/29/2023 11:56 PM
oh so the idea i was doing
23:56
was
23:56
use a gradient descent and an optimiser
23:57
to calculate an input (edited)
23:57
based on the difference between the calculated output and the target output (edited)
23:57
so that works
23:57
like
23:58
i can calculate an input that matches the output
23:58
just not the flag (edited)
23:58
moreso something that "collides" with the flag
Avatar
W6W5W4W3W2W1x+W6W5W4W3W2b1+W6W5W4W3b2+W6W5W4b3+W6W5b4+W6b5+b6=output
23:59
now we have all except x
23:59
just solve this
23:59
assuming precision is good
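For reference, a minimal sketch of that expansion (hypothetical glue code: weights/biases/target_output as parsed by the Fraction loader above, matmul/matsum as sketched earlier), collapsing all six layers into one exact system A x = B:

col = lambda v: [[e] for e in v]          # flat list -> column vector

# A = W6 W5 W4 W3 W2 W1, computed exactly
A = weights[0]
for W in weights[1:]:
    A = matmul(W, A)

# accumulate the constant part: W6..W2 b1 + W6..W3 b2 + ... + b6
acc = col(biases[0])
for W, b in zip(weights[1:], biases[1:]):
    acc = matsum(matmul(W, acc), b)

# B = output minus the accumulated bias, so the whole model is just A x = B
B = [[y - c[0]] for y, c in zip(target_output, acc)]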
Avatar
unpickled admin bot 04/29/2023 11:59 PM
oh (edited)
23:59
i just did
23:59
x = W1^-1 * (W2^-1 * (W3^-1 * (W4^-1 * (W5^-1 * (W6^-1 * (output - b6) - b5) - b4) - b3) - b2) - b1) (edited)
23:59
but ok
Avatar
its not square idk how to invert
Avatar
unpickled admin bot 04/29/2023 11:59 PM
can we not invert?
00:00
oh
Avatar
Avatar
sahuang
W6W5W4W3W2W1x+W6W5W4W3W2b1+W6W5W4W3b2+W6W5W4b3+W6W5b4+W6b5+b6=output
you can just use this
Avatar
im able to get both A and B in fraction
00:12
A is 37x22, B is 37x1
00:12
need to solve x of 22x1 where Ax=b
Avatar
unpickled admin bot 04/30/2023 12:18 AM
@sahuang just solve for the left or right inverse as needed
00:18
you dont need the full inverse lmao
00:19
Ax=b for example you just need the left inv
00:20
oh the roomba is beating us
00:20
hmmm
00:21
maybe i should stop chasing the theoretically possible but really hard unintended
Avatar
sympy.matrices.common.NonInvertibleMatrixError: Matrix det == 0; not invertible.
00:22
this is depressing
Avatar
Avatar
hfz
sympy.matrices.common.NonInvertibleMatrixError: Matrix det == 0; not invertible.
unpickled admin bot 04/30/2023 12:22 AM
left inverse
Avatar
cant invert
Avatar
unpickled admin bot 04/30/2023 12:22 AM
just
00:22
take the left inverse
Avatar
you dont need to rev it
Avatar
unpickled admin bot 04/30/2023 12:22 AM
lmao
00:22
you can solve for Ax=b tho
00:22
with the left inverse of A
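For reference, when A is tall with full column rank, its left inverse is (A^T A)^{-1} A^T, and applying it to b solves Ax = b exactly whenever the system is consistent. A minimal float64 sketch with stand-in data:

import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((37, 22))   # tall, full column rank with probability 1
x_true = rng.standard_normal(22)
b = A @ x_true                      # consistent by construction

A_left = np.linalg.inv(A.T @ A) @ A.T   # left inverse: A_left @ A is the identity
print(np.allclose(A_left @ b, x_true))  # True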
Avatar
np.linalg.lstsq should do
Avatar
unpickled admin bot 04/30/2023 12:23 AM
what is lstsq
Avatar
nvm it cant
00:23
also im using fraction
Avatar
I'm just using sympy, it knows how to do it
Avatar
this is A
Avatar
it could solve for layer 6, but not for layer 5
00:24
here's the code if someone wants to tweak
Avatar
this is B
2.19 KB
00:24
Ax=b
Avatar
will prolly go to bed in a moment
Avatar
you can try my data
00:25
i have A and b
00:25
and need x
Avatar
Avatar
sahuang
this is A
unpickled admin bot 04/30/2023 12:25 AM
my computer is refusing to paste this....
00:25
why so long
Avatar
37x22
Avatar
unpickled admin bot 04/30/2023 12:25 AM
ah
00:25
huh?????/
00:26
python didnt like that matrix, noted
Avatar
Avatar
hfz
Click to see attachment 🖼️
because layer 5 has more variables than eqns
00:27
in my case it should work, let me try
00:28
I sent the wrong file
Avatar
Avatar
sahuang
this is A
unpickled admin bot 04/30/2023 12:28 AM
what is A/B in terms of weights, bias, output?
Avatar
just Ax=B
00:28
i already calculated everything
Avatar
unpickled admin bot 04/30/2023 12:28 AM
ik
Avatar
you combined everything?
Avatar
Avatar
sahuang
just Ax=B
unpickled admin bot 04/30/2023 12:28 AM
just trying to catch up
00:28
lmao
00:28
the unintended is too scuffed
00:28
ik chall author says its doable but its like
00:28
theoretically it is
00:29
see this
Avatar
unpickled admin bot 04/30/2023 12:29 AM
do either of us know how to? no lmao
00:29
ok ty
Avatar
# solve overdetermined system left * x = right where left is size 37x22 and right is size 37x1
00:29
ignore last line
Avatar
unpickled admin bot 04/30/2023 12:29 AM
gimme a bit my computer needs to achieve overheating
Avatar
not sure if there's a bug but i will try to solve this and if no result, debug tmr with self generated data
Avatar
unpickled admin bot 04/30/2023 12:30 AM
"Computes the vector x that approximately solves the equation" @sahuang you can compute exact though
Avatar
i know
00:31
where is that reference
00:31
np is wrong i know
Avatar
Avatar
sahuang
where is that reference
unpickled admin bot 04/30/2023 12:32 AM
i checked the docs for np.linalg.lstsq
Avatar
yeah ik
00:32
thats wrong
00:32
thats when n<m
00:32
we have more equations
Avatar
unpickled admin bot 04/30/2023 12:32 AM
oh
00:32
:sadge:
00:32
just left inv for the ones you have though
00:32
is faster computationally/more accurate (edited)
Avatar
idk how to left inv on fraction matrix
Avatar
unpickled admin bot 04/30/2023 12:33 AM
oh
Avatar
sympy is running forever on the matrices you sent
00:33
oof
Avatar
prob fraction too large
Avatar
unpickled admin bot 04/30/2023 12:36 AM
atp
00:36
the precision loss shouldnt matter
Avatar
pretty sure left and right data are correct
00:36
there's no precision loss in this fraction impl
Avatar
unpickled admin bot 04/30/2023 12:36 AM
this doesnt need to be that precise
Avatar
Avatar
sahuang
there's no precision loss in this fraction impl
unpickled admin bot 04/30/2023 12:36 AM
*shouldnt
00:36
just convert to np.float
00:36
since
Avatar
what if we just change fraction to double now
Avatar
unpickled admin bot 04/30/2023 12:36 AM
it gets within about ~.001 i think of every value
Avatar
unpickled admin bot 04/30/2023 12:37 AM
at least did for me when i tried doing a gradient descent
00:37
and
00:37
since the tensor for the flag was literally the output of ord on each char (edited)
00:37
it should be a roundable difference
00:37
i think
Avatar
[[ 50.42092424] [101.03269245] [ 26.71849702] [-36.88773564] [-76.61594413] [ 44.39960158] [ 68.71135152] [ 36.33306337] [ 4.41593796] [119.26109294] [ 58.48153408] [ 85.9846144 ] [ 41.13197924] [ 27.54137431] [ 32.35543824] [ 8.57326218] [ 22.02081603] [-24.8457248 ] [ 77.91928158] [ 68.94507925] [ 97.63596957] [ 47.37988401]]
00:41
makes more sense than what I got earlier
Avatar
Avatar
unpickled admin bot
it gets within about ~.001 i think of every value
unpickled admin bot 04/30/2023 12:42 AM
(point of what im saying here is just convert to floats)
Avatar
at least we see some 101
Avatar
Avatar
hfz
[[ 50.42092424] [101.03269245] [ 26.71849702] [-36.88773564] [-76.61594413] [ 44.39960158] [ 68.71135152] [ 36.33306337] [ 4.41593796] [119.26109294] [ 58.48153408] [ 85.9846144 ] [ 41.13197924] [ 27.54137431] [ 32.35543824] [ 8.57326218] [ 22.02081603] [-24.8457248 ] [ 77.91928158] [ 68.94507925] [ 97.63596957] [ 47.37988401]]
unpickled admin bot 04/30/2023 12:42 AM
for all of it? (edited)
00:42
i can check its MSE loss
00:42
but
00:42
not for intermediates
Avatar
yes, just the data that sahuang sent, but had to convert to float64
Avatar
unpickled admin bot 04/30/2023 12:42 AM
no like
00:42
is that x?
00:42
in the whole thing (edited)
Avatar
unpickled admin bot 04/30/2023 12:43 AM
oh Ax=b solves the entire thing i think
00:43
owell
Avatar
Avatar
hfz
[[ 50.42092424] [101.03269245] [ 26.71849702] [-36.88773564] [-76.61594413] [ 44.39960158] [ 68.71135152] [ 36.33306337] [ 4.41593796] [119.26109294] [ 58.48153408] [ 85.9846144 ] [ 41.13197924] [ 27.54137431] [ 32.35543824] [ 8.57326218] [ 22.02081603] [-24.8457248 ] [ 77.91928158] [ 68.94507925] [ 97.63596957] [ 47.37988401]]
unpickled admin bot 04/30/2023 12:43 AM
do you have this as a comma separated list of values perhaps.....?
00:44
nvm
00:44
got it
Avatar
[50.42092424295905, 101.03269245149805, 26.718497020353276, -36.88773564060233, -76.61594412934443, 44.399601584157125, 68.71135152203541, 36.33306337098819, 4.415937955611899, 119.26109293913278, 58.48153408208583, 85.98461439875094, 41.131979235095756, 27.541374306218714, 32.35543824022369, 8.573262176253603, 22.02081603251875, -24.84572479902885, 77.91928157533383, 68.94507925073925, 97.63596957157239, 47.37988400755675]
00:44
oh ok
Avatar
unpickled admin bot 04/30/2023 12:48 AM
>>> MSE(model(a), target_output)
tensor(1.5647e+11, dtype=torch.float64, grad_fn=<MseLossBackward0>)
>>>
better but also not close to generated output (edited)
Avatar
ill debug code tmr
Avatar
unpickled admin bot 04/30/2023 12:48 AM
(MSE is a measure of the difference between the output of model(a) and the target output, more == bad) (edited)
Avatar
to see if its correct
00:51
[85, 77, 68, 67, 84, 70, 123, 110.35490848362011, 44.6688619549253, 113.98153306961444, 99.87441631578926, 67.80495409892433, 81.95437158738532, 104.56908932418375, 59.03581478285347, 40.20940259534488, 87.95837372483226, 35.07210921419155, 104.13904639866911, 120.39146783442436, 32.00000012927558, 125]
how off is this?
00:51
@unpickled admin bot
Avatar
unpickled admin bot 04/30/2023 12:52 AM
>>> MSE(model(tensor([85, 77, 68, 67, 84, 70, 123, 110.35490848362011, 44.6688619549253, 113.98153306961444, 99.87441631578926, 67.80495409892433, 81.95437158738532, 104.56908932418375, 59.03581478285347, 40.20940259534488, 87.95837372483226, 35.07210921419155, 104.13904639866911, 120.39146783442436, 32.00000012927558, 125], dtype=torch.float64)), target_output)
tensor(1.5647e+11, dtype=torch.float64, grad_fn=<MseLossBackward0>)
>>>
(edited)
Avatar
hm ok
Avatar
unpickled admin bot 04/30/2023 12:53 AM
what imma do tho
00:53
is put that as the base of a gradient descent
00:53
and see if it spits out something nice
Avatar
its weird because it should have a unique solution if fraction has no precision loss
00:55
and if left, right are correct
00:55
might debug a bit tmr for this
Avatar
unpickled admin bot 04/30/2023 12:55 AM
i mean
00:55
also doesnt look like flag
00:56
so tmr
00:57
im going to generate an output with UMDCTF{...22 len}, then with that output compute left and right, and plug in this flag as input to see if Ax=b
00:57
if they are the same then it means we just need to find the solution, else its just wrong
01:02
here i added test_flag and replaced output by test result
3.97 KB
01:02
LMAO
01:02
01:02
it works so
01:03
so this proves my left and right are correct
01:04
test_flag = "UMDCTF{test_flag_!!??}" assert len(test_flag) == 22 # W6(W5(W4(W3(W2(W1x+b1)+b2)+b3)+b4)+b5)+b6=output x = [] for i in range(len(test_flag)): x.append([Fraction(ord(test_flag[i]), 1)]) tmp = matmul(weights[0], x) tmp = matsum(tmp, biases[0]) tmp = matmul(weights[1], tmp) tmp = matsum(tmp, biases[1]) tmp = matmul(weights[2], tmp) tmp = matsum(tmp, biases[2]) tmp = matmul(weights[3], tmp) tmp = matsum(tmp, biases[3]) tmp = matmul(weights[4], tmp) tmp = matsum(tmp, biases[4]) tmp = matmul(weights[5], tmp) tmp = matsum(tmp, biases[5]) ys = tmp basically added this block before calculating
01:07
lcm is 1000000000000000000 we can just multiply all fractions by this
Avatar
unpickled admin bot 04/30/2023 1:07 AM
ye idk
01:07
rn im just
01:07
trying to gradient descend
01:09
tensor(1.1718, dtype=torch.float64, grad_fn=<MseLossBackward0>)
>>> input_tensor
tensor([ 84.6115, 75.5283, 68.5727, 67.4333, 84.9689, 70.4337, 123.9462, 109.9315, 44.2966, 113.6583, 99.2958, 68.9130, 82.1767, 105.0546, 58.9514, 42.0349, 88.8852, 34.9719, 103.6071, 120.5095, 33.2227, 125.3686], dtype=torch.float64, requires_grad=True)
>>> a()
'ULECUF|n,rcERi;*Y#hy!}'
>>>
running a gradient descent on it did not work
(edited)
Avatar
solved
Avatar
Avatar
sahuang
used /ctf solve
✅ Challenge solved.
Avatar
LMAO
Avatar
unpickled admin bot 04/30/2023 1:10 AM
How??????
Avatar
unpickled admin bot 04/30/2023 1:10 AM
sahuang 🛐
01:10
but also how
Avatar
so first the left and right are correct
Avatar
unpickled admin bot 04/30/2023 1:11 AM
was x bugged?
Avatar
left = ...
right = ...

# get lcm of all denominators
from math import gcd
from functools import reduce

def lcm(a, b):
    return a * b // gcd(a, b)

res = 1
for i in left:
    for j in i:
        res = lcm(res, j.denominator)
for i in right:
    for j in i:
        res = lcm(res, j.denominator)
print(res)  # lcm

# multiply all fractions by lcm to make everything an int
A = []
for i in left:
    A.append([j.numerator * res // j.denominator for j in i])
b = []
for i in right:
    b.append(i[0].numerator * res // i[0].denominator)
print(A)
print(b)

# solve A * x = b, x is 8bit int
from z3 import *

x = [BitVec("x%d" % i, 8) for i in range(22)]
s = Solver()
for i in range(22):
    s.add(x[i] > 32)
    s.add(x[i] < 126)
# start with UMDCTF{ and end with }
s.add(x[0] == ord("U"))
s.add(x[1] == ord("M"))
s.add(x[2] == ord("D"))
s.add(x[3] == ord("C"))
s.add(x[4] == ord("T"))
s.add(x[5] == ord("F"))
s.add(x[6] == ord("{"))
s.add(x[21] == ord("}"))
# solve the system of equations
# (the big ints get coerced to 8-bit bit-vectors, so each equation is effectively checked mod 2**8, still enough constraints to pin down the flag here)
for i in range(22):
    s.add(sum([A[i][j] * x[j] for j in range(22)]) - b[i] == 0)
while s.check() == sat:
    m = s.model()
    print("".join([chr(int(eval(m[x[i]].as_string()))) for i in range(22)]))
    s.add(Or([x[i] != m[x[i]] for i in range(22)]))
01:11
z3 ftw
Avatar
unpickled admin bot 04/30/2023 1:11 AM
ohh
01:11
z3 too op
Avatar
as long as you make it int
01:17
its intended
01:17
author uses z3
01:17
the other solver used sage
01:17
which idk how
Avatar
unpickled admin bot 04/30/2023 1:17 AM
oh
Avatar
Avatar
sahuang
author uses z3
unpickled admin bot 04/30/2023 1:18 AM
sage stores stuff as fractions
01:18
and has all the matrix stuff inbuilt
01:18
they prob just.... solved
Avatar
yeah maybe too
01:18
but ig whole idea is correct
01:18
gotta rest
Avatar
unpickled admin bot 04/30/2023 1:18 AM
true
01:18
still kinda sad i burnt time on gradient descent
01:18
the author said it was possible so i hyperfocused on it
Avatar
yeah idk
Avatar
unpickled admin bot 04/30/2023 1:18 AM
mb
Avatar
its precise so gradient descent will prob lose precision
Avatar
unpickled admin bot 04/30/2023 1:19 AM
its not that
01:19
its
01:19
gradient descent cant determine which input is the right one
01:19
thinking of the neural network as a really weak hash function, its like it gives me collisions of the flag (edited)
01:19
but not the flag
01:19
very sadge
Avatar
ah ok
01:20
yeah idk
01:22
from z3 import *

layers = [[[], []]]
data = open("./model.txt", "r").readlines()
biases = False
for l in data:
    l = l.strip()
    if "layer" in l:
        i = int(l[-1])
        layers.append([[], []])
    elif "biases" in l:
        biases = True
    elif "weights" in l:
        biases = False
    elif len(l) > 0:
        nums = [RealVal(x) for x in l.split(" ")]
        if biases:
            layers[i][1] += nums
        else:
            layers[i][0] += nums

layer_data = [[]]
for i in range(22):  # input layer size
    layer_data[0].append(Int(str(i)))
for i in range(1, len(layers)):
    layer_data.append([])
    for j in range(len(layers[i][1])):
        n = RealVal(0)
        for k in range(len(layer_data[i - 1])):
            n += layer_data[i - 1][k] * layers[i][0][j * len(layer_data[i - 1]) + k]
        n += layers[i][1][j]
        layer_data[-1].append(n)

outputs = [RealVal(x) for x in open("./outputs.txt").read().strip().split(" ")]

s = Solver()
for i in range(len(layer_data[-1])):
    s.add(layer_data[-1][i] == outputs[i])
for i in layer_data[0]:
    s.add(i > RealVal(32))
    s.add(i < RealVal(128))

flag_header = [85, 77, 68, 67, 84, 70, 123]  # UMDCTF{
for i in range(len(flag_header)):
    s.add(layer_data[0][i] == RealVal(flag_header[i]))

s.check()
print(s.model())
01:22
author solve script
01:22
didnt know z3 works w real val
01:22
also why didnt he calc the long matrix expr
01:23
did the z3 per layer
01:23
i see
01:23
interesting
01:24
frenchroomba used LLL
01:24
lol
01:24
no surprise
01:24
the joseph classic
Avatar
unpickled admin bot 04/30/2023 1:25 AM
LLL literally just solves the challenge
01:25
idc what category you do
01:25
LLL
Avatar
yeah lmao
Exported 617 message(s)